Lower bandwidth used for topology refresh #5618
Conversation
Great work! Would be good to have @jstuczyn quickly skim it as well though
/// Topology Provider built around a cached piecewise provider that uses the Nym API to
/// fetch changes and node details.
#[derive(Clone)]
pub struct NymApiTopologyProvider {
What about using something like NymApiSmartTopologyProvider? Having two types with the same name was a bit confusing initially when reading the code.
pub fn gateways(&self) -> impl Iterator<Item = &RoutingNode> {
    self.node_details.values().filter(|n| {
        self.rewarded_set.entry_gateways.contains(&n.node_id)
            || self.rewarded_set.exit_gateways.contains(&n.node_id)
Would be good to confirm with @jstuczyn that this is correct?
No, it's not, but it's easy to miss because we've been going back and forth about this for quite a while. The current consensus is that you can use any node that supports gateway mode, regardless of whether it's in the active set or not.
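A minimal sketch of what a capability-based filter could look like, reusing the mixnet_entry/mixnet_exit role flags visible elsewhere in this diff. The struct shapes and the CachedTopology name here are assumptions for illustration, not the crate's actual definitions.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
pub struct SupportedRoles {
    pub mixnode: bool,
    pub mixnet_entry: bool,
    pub mixnet_exit: bool,
}

pub struct RoutingNode {
    pub node_id: u32,
    pub supported_roles: SupportedRoles,
}

impl RoutingNode {
    /// A node is a gateway candidate if it declares either gateway role.
    pub fn supports_gateway_mode(&self) -> bool {
        self.supported_roles.mixnet_entry || self.supported_roles.mixnet_exit
    }
}

struct CachedTopology {
    node_details: HashMap<u32, RoutingNode>,
}

impl CachedTopology {
    /// Per the consensus above: filter on declared gateway capability,
    /// with no check against the rewarded/active set.
    pub fn gateways(&self) -> impl Iterator<Item = &RoutingNode> {
        self.node_details
            .values()
            .filter(|n| n.supports_gateway_mode())
    }
}
```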
A few notes:
- "Only descriptors for node IDs in the new layer assignments that are NOT in the known topology cache are pulled" — sometimes you might have to pull information about nodes more often than that, regardless of whether they're known, because they might have changed their internal configuration (like starting to use a different mix port); see the sketch after this list.
- You have to be pulling all gateways, not just the ones in the rewarded set, as clients are allowed to use any of them at any time.
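A sketch of a fetch policy covering the first note: descriptors get re-pulled not only for unknown nodes but also for known nodes whose cached copy is older than some freshness bound, since a node may have changed its mix port or other configuration in the meantime. All names here are illustrative.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A cached descriptor tagged with when it was fetched; `D` stands in
/// for the real descriptor type.
struct Cached<D> {
    descriptor: D,
    fetched_at: Instant,
}

/// Decide which node ids from the new layer assignments need a network
/// round-trip: unknown nodes, plus known nodes whose cached descriptor
/// is older than `max_age`.
fn ids_to_fetch<D>(
    cache: &HashMap<u32, Cached<D>>,
    assigned: &[u32],
    max_age: Duration,
) -> Vec<u32> {
    assigned
        .iter()
        .copied()
        .filter(|id| {
            cache
                .get(id)
                .map_or(true, |c| c.fetched_at.elapsed() > max_age)
        })
        .collect()
}
```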
@@ -231,6 +231,8 @@ impl PacketPreparer {
            mixnet_entry: false,
            mixnet_exit: false,
        },
        // We have no information about performance in legacy node formats
        performance: Percent::hundred(),
At this point I guess we should probably use a performance of zero if nodes are still outdated...
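A toy illustration of the suggested default. The real `Percent` type lives in the nym codebase; the stand-in below only mirrors the two constructors referenced in this discussion.

```rust
/// Stand-in for the real `Percent` type, only for this sketch.
#[derive(Clone, Copy, Debug)]
struct Percent(u8);

impl Percent {
    fn zero() -> Self {
        Percent(0)
    }
    fn hundred() -> Self {
        Percent(100)
    }
}

/// Legacy node formats carry no performance information, so scoring them
/// at zero (rather than a perfect hundred) deprioritizes outdated nodes
/// instead of treating them as flawless.
fn legacy_node_performance() -> Percent {
    Percent::zero()
}
```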
pub(super) async fn nodes_basic_batch(
    state: State<AppState>,
    Query(query_params): Query<NodesParamsWithRole>,
    Json(ids): Json<Vec<u32>>,
There should be some limit on the number of nodes you can request (because certain people might complain it could lead to DoS).
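One way to address this, sketched against the handler signature above. `MAX_BATCH_SIZE`, the error shape, and the `NodeBasic` response type are assumptions, not the PR's actual code; `AppState` and `NodesParamsWithRole` are taken from the diff.

```rust
use axum::extract::{Json, Query, State};
use axum::http::StatusCode;

/// Illustrative cap; the right number depends on descriptor size and
/// acceptable response payloads.
const MAX_BATCH_SIZE: usize = 200;

pub(super) async fn nodes_basic_batch(
    state: State<AppState>,
    Query(query_params): Query<NodesParamsWithRole>,
    Json(ids): Json<Vec<u32>>,
) -> Result<Json<Vec<NodeBasic>>, (StatusCode, String)> {
    // Reject oversized requests up front so a single call cannot force
    // the API to serialize an unbounded number of descriptors.
    if ids.len() > MAX_BATCH_SIZE {
        return Err((
            StatusCode::PAYLOAD_TOO_LARGE,
            format!("at most {MAX_BATCH_SIZE} node ids per request"),
        ));
    }
    // ... existing per-id lookup logic from the handler body ...
    todo!()
}
```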
Prevents re-pulling node descriptors that are already known while using the TopologyRefresher.

What does this do
- Only pulls descriptors for node_ids in the new layer assignments that are not already in the known topology cache.
- Adds a TopologyProvider that caches topologies and only goes to the network if the known topology is older than the configured cache TTL.
- Adds a Performance field to the stored RoutingNode in the cached topology. This way the cache contains all node descriptors and queries simply filter based on the stored performance, as opposed to filtering before storing and downloading more descriptors.

What does this not do
- MixnetClient still uses an independent TopologyProvider and cached topology. This PR does not create a shared topology provider that can be used for every new connection.
- The TopologyRefresher is created as a MixnetClient is starting up. This means that the initial topology pull doesn't happen until we have already started the process of connecting to the mixnet.
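A minimal sketch of the cache-TTL behaviour described above, under assumed names; the wrapped Nym API client that refills the cache is elided.

```rust
use std::time::{Duration, Instant};

/// Cached topology tagged with its fetch time; `T` stands in for the
/// real topology type.
struct CachedEntry<T> {
    topology: T,
    fetched_at: Instant,
}

struct CachingTopologyProvider<T> {
    cache: Option<CachedEntry<T>>,
    cache_ttl: Duration,
}

impl<T> CachingTopologyProvider<T> {
    /// Return the cached topology only while it is younger than the
    /// configured TTL; `None` tells the caller to go to the network.
    fn cached_if_fresh(&self) -> Option<&T> {
        self.cache
            .as_ref()
            .filter(|c| c.fetched_at.elapsed() < self.cache_ttl)
            .map(|c| &c.topology)
    }
}
```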